1 A well-defined HDX-MS experiment

This vignette describes how to analyse time-resolved differential HDX-MS experiments. The key element is at least two conditions to compare, e.g. apo vs. antibody-bound, apo vs. small molecule, or protein closed vs. protein open. The experiment can be replicated, though if sufficiently many time points are analysed (>= 3) then significant results can sometimes be obtained even without replication. The data provided should be centroid data; this package does not yet support analysis directly from raw spectra. Typically the data will be provided as a .csv from tools such as DynamX or HDExaminer.

2 Main elements of the package

The package relies on Bioconductor infrastructure so that it integrates with other data types and can benefit from advances in other fields of mass spectrometry. There are package-specific objects, classes and methods, but importantly there is reuse of classes found in quantitative proteomics, mainly the QFeatures object, which extends the SummarizedExperiment class for mass-spectrometry data. The focus of this package is on testing and on visualisation of the testing results.

3 Data

We will begin with a structural-variant experiment in which MBP and a structural variant were mixed in different proportions. HDX-MS was performed on these samples and we expect to see reproducible but subtle differences. We first load the data, which are provided with the package in .csv format.

csv_filename <- "MBP.csv"
csv_filepath <- system.file("extdata", csv_filename, package = "hdxstats")

We can now read in the .csv file and have a quick look at the data.

data <- read.csv(csv_filepath)
head(data) # have a look
##   hx_sample pep_start pep_end pep_sequence pep_charge     d confidence  score
## 1       10%        19      30 VIWINGDKGYNG          2 2.120     medium 0.8686
## 2       10%        19      30 VIWINGDKGYNG          2 2.146     medium 0.8173
## 3       10%        19      30 VIWINGDKGYNG          2 2.143     medium 0.8839
## 4       10%        19      30 VIWINGDKGYNG          2    NA       <NA>     NA
## 5       10%        19      30 VIWINGDKGYNG          2    NA       <NA>     NA
## 6       10%        19      30 VIWINGDKGYNG          2    NA       <NA>     NA
##   hx_time time_unit replicate_cnt
## 1      30         s             1
## 2      30         s             2
## 3      30         s             3
## 4      30         s             4
## 5      30         s             5
## 6      30         s             6
length(unique(data$pep_sequence)) # peptide sequences
## [1] 115

Let us quickly visualise some of the data so that we can see some of its features.

condition1 <- data$pep_sequence == unique(data$pep_sequence)[1]
condition2 <- data$pep_charge == 2
data_filtered <- filter(data, condition1, condition2)

selection <- c(7,5,1,2,3,4,6)
colors <- factor(data_filtered$hx_sample, unique(data$hx_sample)[selection])
colormap <- "Set2"
point_size <- 2

# make a pretty plot
data_filtered %>%
    ggplot(aes(x = hx_time, y = d, group = factor(replicate_cnt), color = colors)) + 
    # further customise plot
    theme_classic() +
    geom_point(size = point_size) + 
    scale_color_manual(values = brewer.pal(n = 7, name = colormap)) + 
    labs(color = "experiment", 
         x = "Deuterium Exposure", 
         y = "Deuterium incorporation"
         )
## Warning: Removed 96 rows containing missing values (geom_point).

We can see that the units of the time dimension are seconds and that deuterium incorporation is measured in Daltons (Da).

4 Parsing to an object of class QFeatures

Working from a .csv is likely to cause issues downstream. Indeed, we run the risk of accidentally changing the data or corrupting the file in some way. Secondly, every .csv file will be formatted slightly differently, so making extensible tools for these files would be inefficient. Furthermore, working with a generic class used in other mass-spectrometry fields can speed up analysis and the adoption of new methods. We will work with the QFeatures class from the QFeatures package, as it is a powerful and scalable way to store quantitative mass-spectrometry data.

Firstly, the data is stored in long format rather than wide format. We first switch the data to wide format.

columns_names <- c("hx_time", "replicate_cnt", "hx_sample") # to merge into new column names
columns_fixed <- c("pep_sequence", "pep_charge")

# transform data
data_wide <- pivot_wider(data.frame(data),
                        values_from = d,
                        names_from = all_of(columns_names),
                        id_cols = all_of(columns_fixed)
                        )
# check how the data looks now
head(data_wide)
## # A tibble: 6 × 198
##   pep_sequence pep_charge `30_1_10%` `30_2_10%` `30_3_10%` `30_4_10%` `30_5_10%`
##   <chr>             <int>      <dbl>      <dbl>      <dbl>      <dbl>      <dbl>
## 1 VIWINGDKGYNG          2      2.12       2.15       2.14          NA         NA
## 2 VIWINGDKGYN…          2      2.12       2.12       2.11          NA         NA
## 3 GDKGYNGLAEVG          3      0.552      0.555      0.553         NA         NA
## 4 LAEVGKKFEKD…          4      2.41       2.36       2.42          NA         NA
## 5 AEVGKKFEKDT…          4      0.458      0.425      0.573         NA         NA
## 6 TVEHPDKL              3      1.43       1.44       1.44          NA         NA
## # … with 191 more variables: `30_6_10%` <dbl>, `30_7_10%` <dbl>,
## #   `240_1_10%` <dbl>, `240_2_10%` <dbl>, `240_3_10%` <dbl>, `240_4_10%` <dbl>,
## #   `240_5_10%` <dbl>, `240_6_10%` <dbl>, `240_7_10%` <dbl>,
## #   `1800_1_10%` <dbl>, `1800_2_10%` <dbl>, `1800_3_10%` <dbl>,
## #   `1800_4_10%` <dbl>, `1800_5_10%` <dbl>, `1800_6_10%` <dbl>,
## #   `1800_7_10%` <dbl>, `14400_1_10%` <dbl>, `14400_2_10%` <dbl>,
## #   `14400_3_10%` <dbl>, `14400_4_10%` <dbl>, `14400_5_10%` <dbl>, …

Next, we notice that there are many columns containing only NA values. The following code chunk removes these columns.

data_wide <- data_wide[, colSums(is.na(data_wide)) != nrow(data_wide)]

# check how the data looks now
head(data_wide)
## # A tibble: 6 × 102
##   pep_sequence           pep_charge `30_1_10%` `30_2_10%` `30_3_10%` `240_1_10%`
##   <chr>                       <int>      <dbl>      <dbl>      <dbl>       <dbl>
## 1 VIWINGDKGYNG                    2      2.12       2.15       2.14        3.69 
## 2 VIWINGDKGYNGL                   2      2.12       2.12       2.11        3.62 
## 3 GDKGYNGLAEVG                    3      0.552      0.555      0.553       0.725
## 4 LAEVGKKFEKDTGIKVTVEHP…          4      2.41       2.36       2.42        3.23 
## 5 AEVGKKFEKDTGIKV                 4      0.458      0.425      0.573       0.843
## 6 TVEHPDKL                        3      1.43       1.44       1.44        1.79 
## # … with 96 more variables: `240_2_10%` <dbl>, `240_3_10%` <dbl>,
## #   `1800_1_10%` <dbl>, `1800_2_10%` <dbl>, `1800_3_10%` <dbl>,
## #   `14400_1_10%` <dbl>, `14400_2_10%` <dbl>, `14400_3_10%` <dbl>,
## #   `30_1_15%` <dbl>, `30_2_15%` <dbl>, `30_3_15%` <dbl>, `240_1_15%` <dbl>,
## #   `240_2_15%` <dbl>, `240_3_15%` <dbl>, `1800_1_15%` <dbl>,
## #   `1800_2_15%` <dbl>, `1800_3_15%` <dbl>, `14400_1_15%` <dbl>,
## #   `14400_2_15%` <dbl>, `14400_3_15%` <dbl>, `30_1_20%` <dbl>, …

We can see that the number of columns has dropped from 198 to 102.

Now, we note that the column names are not very informative. We are going to format them in a very specific way so that later functions can automatically infer the experimental design from the column names. We impose the format X&lt;time&gt;rep&lt;replicate&gt;cond&lt;condition&gt;, so that, for example, 30_1_10% becomes X30rep1cond10.

columns_to_remove <- c(1,2) # exclude the first two (non-quantitative) columns
old_columns_names <- colnames(data_wide)[-columns_to_remove]  

# add X and rep markers to new column names (all time points end in 0)
new_object.colnames <- paste0("X", old_columns_names)
new_object.colnames <- gsub("0_", "0rep", new_object.colnames)
new_object.colnames <- gsub("_", "cond", new_object.colnames)
# remove awkward % signs
new_object.colnames <- gsub("%", "", new_object.colnames)
# drop everything after the space, so "WT Null" becomes "WT" ("Null" could get confusing later and "WT" is clear)
new_object.colnames <- gsub(" .*", "", new_object.colnames)

# Compare old and new column names
old_columns_names
##   [1] "30_1_10%"        "30_2_10%"        "30_3_10%"        "240_1_10%"      
##   [5] "240_2_10%"       "240_3_10%"       "1800_1_10%"      "1800_2_10%"     
##   [9] "1800_3_10%"      "14400_1_10%"     "14400_2_10%"     "14400_3_10%"    
##  [13] "30_1_15%"        "30_2_15%"        "30_3_15%"        "240_1_15%"      
##  [17] "240_2_15%"       "240_3_15%"       "1800_1_15%"      "1800_2_15%"     
##  [21] "1800_3_15%"      "14400_1_15%"     "14400_2_15%"     "14400_3_15%"    
##  [25] "30_1_20%"        "30_2_20%"        "30_3_20%"        "240_1_20%"      
##  [29] "240_2_20%"       "240_3_20%"       "1800_1_20%"      "1800_2_20%"     
##  [33] "1800_3_20%"      "14400_1_20%"     "14400_2_20%"     "14400_3_20%"    
##  [37] "30_1_25%"        "30_2_25%"        "30_3_25%"        "240_1_25%"      
##  [41] "240_2_25%"       "240_3_25%"       "1800_1_25%"      "1800_2_25%"     
##  [45] "1800_3_25%"      "14400_1_25%"     "14400_2_25%"     "14400_3_25%"    
##  [49] "30_1_5%"         "30_2_5%"         "30_3_5%"         "240_1_5%"       
##  [53] "240_2_5%"        "240_3_5%"        "1800_1_5%"       "1800_2_5%"      
##  [57] "1800_3_5%"       "14400_1_5%"      "14400_2_5%"      "14400_3_5%"     
##  [61] "30_1_W169G"      "30_2_W169G"      "30_3_W169G"      "240_1_W169G"    
##  [65] "240_2_W169G"     "240_3_W169G"     "1800_1_W169G"    "1800_2_W169G"   
##  [69] "1800_3_W169G"    "14400_1_W169G"   "14400_2_W169G"   "14400_3_W169G"  
##  [73] "30_1_WT Null"    "30_2_WT Null"    "30_3_WT Null"    "30_4_WT Null"   
##  [77] "30_5_WT Null"    "30_6_WT Null"    "30_7_WT Null"    "240_1_WT Null"  
##  [81] "240_2_WT Null"   "240_3_WT Null"   "240_4_WT Null"   "240_5_WT Null"  
##  [85] "240_6_WT Null"   "240_7_WT Null"   "1800_1_WT Null"  "1800_2_WT Null" 
##  [89] "1800_3_WT Null"  "1800_4_WT Null"  "1800_5_WT Null"  "1800_6_WT Null" 
##  [93] "1800_7_WT Null"  "14400_1_WT Null" "14400_2_WT Null" "14400_3_WT Null"
##  [97] "14400_4_WT Null" "14400_5_WT Null" "14400_6_WT Null" "14400_7_WT Null"
new_object.colnames
##   [1] "X30rep1cond10"       "X30rep2cond10"       "X30rep3cond10"      
##   [4] "X240rep1cond10"      "X240rep2cond10"      "X240rep3cond10"     
##   [7] "X1800rep1cond10"     "X1800rep2cond10"     "X1800rep3cond10"    
##  [10] "X14400rep1cond10"    "X14400rep2cond10"    "X14400rep3cond10"   
##  [13] "X30rep1cond15"       "X30rep2cond15"       "X30rep3cond15"      
##  [16] "X240rep1cond15"      "X240rep2cond15"      "X240rep3cond15"     
##  [19] "X1800rep1cond15"     "X1800rep2cond15"     "X1800rep3cond15"    
##  [22] "X14400rep1cond15"    "X14400rep2cond15"    "X14400rep3cond15"   
##  [25] "X30rep1cond20"       "X30rep2cond20"       "X30rep3cond20"      
##  [28] "X240rep1cond20"      "X240rep2cond20"      "X240rep3cond20"     
##  [31] "X1800rep1cond20"     "X1800rep2cond20"     "X1800rep3cond20"    
##  [34] "X14400rep1cond20"    "X14400rep2cond20"    "X14400rep3cond20"   
##  [37] "X30rep1cond25"       "X30rep2cond25"       "X30rep3cond25"      
##  [40] "X240rep1cond25"      "X240rep2cond25"      "X240rep3cond25"     
##  [43] "X1800rep1cond25"     "X1800rep2cond25"     "X1800rep3cond25"    
##  [46] "X14400rep1cond25"    "X14400rep2cond25"    "X14400rep3cond25"   
##  [49] "X30rep1cond5"        "X30rep2cond5"        "X30rep3cond5"       
##  [52] "X240rep1cond5"       "X240rep2cond5"       "X240rep3cond5"      
##  [55] "X1800rep1cond5"      "X1800rep2cond5"      "X1800rep3cond5"     
##  [58] "X14400rep1cond5"     "X14400rep2cond5"     "X14400rep3cond5"    
##  [61] "X30rep1condW169G"    "X30rep2condW169G"    "X30rep3condW169G"   
##  [64] "X240rep1condW169G"   "X240rep2condW169G"   "X240rep3condW169G"  
##  [67] "X1800rep1condW169G"  "X1800rep2condW169G"  "X1800rep3condW169G" 
##  [70] "X14400rep1condW169G" "X14400rep2condW169G" "X14400rep3condW169G"
##  [73] "X30rep1condWT"       "X30rep2condWT"       "X30rep3condWT"      
##  [76] "X30rep4condWT"       "X30rep5condWT"       "X30rep6condWT"      
##  [79] "X30rep7condWT"       "X240rep1condWT"      "X240rep2condWT"     
##  [82] "X240rep3condWT"      "X240rep4condWT"      "X240rep5condWT"     
##  [85] "X240rep6condWT"      "X240rep7condWT"      "X1800rep1condWT"    
##  [88] "X1800rep2condWT"     "X1800rep3condWT"     "X1800rep4condWT"    
##  [91] "X1800rep5condWT"     "X1800rep6condWT"     "X1800rep7condWT"    
##  [94] "X14400rep1condWT"    "X14400rep2condWT"    "X14400rep3condWT"   
##  [97] "X14400rep4condWT"    "X14400rep5condWT"    "X14400rep6condWT"   
## [100] "X14400rep7condWT"
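
Because the names now follow a fixed pattern, the design can be recovered mechanically. As a quick illustration (not part of the package), base R regular expressions can split a name back into its components:

```r
# split a column name of the form X<time>rep<replicate>cond<condition>
name <- "X240rep3condW169G"
parts <- regmatches(name, regexec("^X([0-9]+)rep([0-9]+)cond(.+)$", name))[[1]]
parts[2:4] # time, replicate, condition: "240" "3" "W169G"
```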

We will now parse the data into an object of class QFeatures; we have provided a function in the package to assist with this. If you want to do this yourself, use the readQFeatures function from the QFeatures package.

# column range corresponding to numeric data
initial_column <- 3
last_column <- 102

# hdxstats method
data_qDF <- parseDeutData(object = DataFrame(data_wide),
                          design = new_object.colnames,
                          quantcol = initial_column:last_column
                          )

We save this object for future use in the vignettes ahead.

saveRDS(data_qDF, file = "data/MBPqDF.rds")

5 Heatmap visualisations of HDX data

To help us get used to the QFeatures object, we show how to generate a heatmap of these data from it:

data_qDF_transposed <- t(assay(data_qDF))

# parameters to customise heatmap plot
colors_discrete <- brewer.pal(n = 9, name = "BuPu") # colours from discrete colormap (BuPu)
legend_labels <- c("0", "2", "4", "6", "8","10", "12", "Incorporation")
legend_breaks <- c(0, 2, 4, 6, 8, 10, 12, max(assay(data_qDF)))
title <- "Structural Variant Deuterium Incorporation"

pheatmap(data_qDF_transposed,
         cluster_rows = FALSE, 
         cluster_cols = FALSE,
         color = colors_discrete,
         main = title, 
         fontsize = 14,
         legend_breaks = legend_breaks,
         legend_labels = legend_labels
         )

If you prefer to have the start-to-end residue numbers in the heatmap instead, you can change the plot as follows:

data_qDF_transposed <- t(assay(data_qDF))

# parameters to customise heatmap plot
colors_discrete <- brewer.pal(n = 9, name = "BuPu") # colours from discrete colormap (BuPu)
legend_labels <- c("0", "2", "4", "6", "8","10", "12", "Incorporation")
legend_breaks <- c(0, 2, 4, 6, 8, 10, 12, max(assay(data_qDF)))
title <- "Structural Variant Deuterium Incorporation"

# take start-to-end residue numbers
regions <- unique(data[,c("pep_start", "pep_end")])
column_labels_new <- paste0("[", regions[,1], ",", regions[,2], "]")

pheatmap(data_qDF_transposed,
         cluster_rows = FALSE, 
         cluster_cols = FALSE,
         color = colors_discrete,
         main = title, 
         fontsize = 14,
         legend_breaks = legend_breaks,
         legend_labels = legend_labels,
         labels_col = column_labels_new
         )

It may be useful to normalise HDX-MS data for either interpretation or visualisation purposes. We can normalise by the percentage (pc) of exchangeable amides or by using back-exchange correction values (bc). We first normalise by percentage incorporation and visualise the result as a heatmap.
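
To make the "pc" idea concrete, here is the arithmetic sketched by hand (assuming, as the method name suggests, that incorporation is divided by the peptide's number of exchangeable amides; the numbers below are hypothetical):

```r
d <- 2.12        # observed deuterium incorporation (Da)
n_amides <- 10   # exchangeable amides for this peptide (hypothetical)
d / n_amides     # fraction of the theoretical maximum, 0.212
```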

# normalised data with hdxstats method
data_qDF_normalised <- normalisehdx(data_qDF,
                                    sequence = unique(data$pep_sequence),
                                    method = "pc")

data_qDF_transposed <- t(assay(data_qDF_normalised))

# parameters to customise heatmap plot
colors_discrete <- brewer.pal(n = 9, name = "BuPu") # colours from discrete colormap (BuPu)
legend_labels_new <- c("0", "0.2", "0.4", "0.6", "0.8","1", "Incorporation")
legend_breaks_new <- c(0, 0.2, 0.4, 0.6, 0.8, 1, 1.2)
title_new <- "Structural Variant Deuterium Incorporation: Normalised by Percentage Incorporation"

# take start-to-end residue numbers
regions <- unique(data[,c("pep_start", "pep_end")])
column_labels_new <- paste0("[", regions[,1], ",", regions[,2], "]")

pheatmap(data_qDF_transposed,
         cluster_rows = FALSE, 
         cluster_cols = FALSE,
         color = colors_discrete,
         main = title_new, 
         fontsize = 14,
         legend_breaks = legend_breaks_new,
         legend_labels = legend_labels_new,
         labels_col = column_labels_new
         )

Now, we demonstrate a back-exchange correction (bc) calculation. The back-exchange values are fictitious, but the code chunk below demonstrates how to set this up.

# made-up correction factor using hdxstats method: exchangeableAmides
correction <- (exchangeableAmides(unique(data$pep_sequence)) + 1) * 0.9

data_qDF_normalised <- normalisehdx(data_qDF,
                                    sequence = unique(data$pep_sequence),
                                    method = "bc", 
                                    correction = correction
                                    )

data_qDF_transposed <- t(assay(data_qDF_normalised))

# parameters to customise heatmap plot
colors_discrete <- brewer.pal(n = 9, name = "BuPu") # colours from discrete colormap (BuPu)
legend_labels_new <- c("0", "0.2", "0.4", "0.6", "0.8","1", "Incorporation")
legend_breaks_new <- c(0, 0.2, 0.4, 0.6, 0.8, 1, 1.2)
title_new <- "Structural Variant Deuterium Incorporation: Normalised by Back-exchange Correction"

# take start-to-end residue numbers
regions <- unique(data[,c("pep_start", "pep_end")])
column_labels_new <- paste0("[", regions[,1], ",", regions[,2], "]")

pheatmap(data_qDF_transposed,
         cluster_rows = FALSE, 
         cluster_cols = FALSE,
         color = colors_discrete,
         main = title_new, 
         fontsize = 14,
         legend_breaks = legend_breaks_new,
         legend_labels = legend_labels_new,
         labels_col = column_labels_new
         )

6 Functional data analysis of HDX-MS data

The hdxstats package uses an empirical Bayes functional approach to analyse the data. We illustrate the approach in steps to build intuition.
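
The functional form fitted throughout this section is the stretched-exponential uptake curve that appears in the model summaries, D(t) = a(1 - exp(-b t^p)) + d. A minimal sketch, with made-up parameter values, shows the shape of such a curve:

```r
# stretched-exponential deuterium uptake: plateau a + d, rate b, stretching p
uptake <- function(t, a, b, d, p) a * (1 - exp(-b * t^p)) + d
timepoints <- c(30, 240, 1800, 14400) # seconds, as in this experiment
uptake(timepoints, a = 3, b = 0.05, d = 0.1, p = 0.5)
```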

First, we fit a parametric model to the data restricted to a single randomly chosen peptide. We explore the HdxStatModel class objects and their attributes.

all_peptides <- rownames(data_qDF)[[1]] # peptides in the first assay
selected_peptide <- all_peptides[37] # arbitrarily picked

fitting_method <- differentialUptakeKinetics # model for fitting
starting_parameters <- list(a = NULL, b = 0.0001,  d = NULL, p = 1) # initial model parameter guesses

# HdxStatModel class object
result <- fitting_method(object = data_qDF[,1:100], # provide a QFeatures object
                         feature = selected_peptide, # which peptide we fit
                         start = starting_parameters
                         ) 

Here, we see the HdxStatModel class: a functional model was applied to the data and 7 models were fitted, one for each available experimental condition.

result
## Object of class "HdxStatModel"
## Method: Functional Model 
## Fitted 7

The result object provides several slots, each carrying a piece of information. For instance, the nullmodel and alternative slots provide the underlying fitted models. The method and formula slots record what analysis was performed. The vis slot provides a ggplot object so that we can visualise the functional fits.

Let’s now play with the visualisation slot!

result@vis

Since this is a ggplot object, we can customise it in the usual grammar-of-graphics way.

colors <- brewer.pal(n = 8, name = "Set2")

result@vis + scale_color_manual(values = colors)
## Scale for 'colour' is already present. Adding another scale for 'colour',
## which will replace the existing scale.

A number of standard methods are available and can be applied to an HdxStatModel object; these extend the usual base stats methods. They include:

  1. anova: An analysis of variance
  2. logLik: The log-likelihood of all the fitted models
  3. residuals: The residuals for the fitted models
  4. vcov: The variance-covariance matrix between parameters of the models
  5. likRatio: The likelihood ratio between null and alternative models
  6. wilk: Applies Wilks’ theorem to obtain a p-value from the likelihood ratio
  7. coef: The fitted model coefficients
  8. deviance: The deviance of the fitted models
  9. summary: The statistical summary of the models.
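
Items 5 and 6 are related: under the null hypothesis, Wilks’ theorem gives the asymptotic distribution of the likelihood-ratio statistic as chi-squared, with degrees of freedom equal to the difference in the number of free parameters between the models. A rough sketch of that calculation (the statistic scaling and the degrees of freedom here are illustrative assumptions, not necessarily what wilk uses internally):

```r
log_lr <- 303.9  # log likelihood ratio, as returned by likRatio(result)
df <- 24         # hypothetical difference in free parameters
pchisq(2 * log_lr, df = df, lower.tail = FALSE)
```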

Let’s test these now.

anova(result)
## Analysis of Variance Table
## 
## Model 1: value ~ a * (1 - exp(-b * (timepoint)^p)) + d
## Model 2: value ~ a * (1 - exp(-b * (timepoint)^p)) + d
## Model 3: value ~ a * (1 - exp(-b * (timepoint)^p)) + d
## Model 4: value ~ a * (1 - exp(-b * (timepoint)^p)) + d
## Model 5: value ~ a * (1 - exp(-b * (timepoint)^p)) + d
## Model 6: value ~ a * (1 - exp(-b * (timepoint)^p)) + d
## Model 7: value ~ a * (1 - exp(-b * (timepoint)^p)) + d
## Model 8: value ~ a * (1 - exp(-b * (timepoint)^p)) + d
##   Res.Df Res.Sum Sq  Df  Sum Sq F value    Pr(>F)    
## 1     96    14.0906                                  
## 2      8     0.0786  88 14.0120 16.2033 0.0001427 ***
## 3      8     0.0964   0  0.0000                      
## 4      8     0.0619   0  0.0000                      
## 5      8     0.0435   0  0.0000                      
## 6      8     0.0889   0  0.0000                      
## 7      8     0.1386   0  0.0000                      
## 8     24     0.1984 -16 -0.0598  0.2157 0.9955664    
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
logLik(result)
##       null       alt1       alt2       alt3       alt4       alt5       alt6 
## -43.910628  13.141365  11.918545  14.576956  16.697584  12.405731   9.739545 
##       alt7 
##  29.565864
residuals(result)
## $null
##   [1] -0.108719440 -0.204719440 -0.120719440 -0.046192595 -0.072192595
##   [6] -0.010192595 -0.265773867 -0.231773867 -0.303773867 -0.046100142
##  [11] -0.131100142 -0.108100142 -0.070719440 -0.097719440 -0.183719440
##  [16] -0.012192595  0.020807405  0.020807405 -0.275773867 -0.159773867
##  [21] -0.194773867  0.034899858 -0.013100142  0.061899858 -0.015719440
##  [26] -0.105719440 -0.014719440  0.031807405  0.017807405  0.049807405
##  [31] -0.171773867 -0.134773867 -0.188773867  0.004899858  0.067899858
##  [36]  0.000899858 -0.024719440 -0.075719440 -0.139719440  0.001807405
##  [41]  0.033807405  0.056807405 -0.085773867 -0.101773867 -0.084773867
##  [46] -0.063100142 -0.012100142  0.036899858 -0.113719440 -0.101719440
##  [51] -0.104719440 -0.037192595 -0.069192595 -0.025192595 -0.223773867
##  [56] -0.273773867 -0.375773867 -0.084100142  0.006899858 -0.065100142
##  [61]  0.390280560  0.447280560  0.426280560  1.201807405  1.248807405
##  [66]  1.188807405  1.308226133  1.226226133  1.310226133  0.678899858
##  [71]  0.675899858  0.661899858 -0.168719440 -0.124719440 -0.119719440
##  [76] -0.123719440 -0.110719440 -0.235719440 -0.179719440 -0.119192595
##  [81] -0.201192595 -0.149192595 -0.142192595 -0.159192595 -0.124192595
##  [86] -0.197192595 -0.471773867 -0.452773867 -0.382773867 -0.405773867
##  [91] -0.480773867 -0.420773867 -0.423773867 -0.214100142 -0.195100142
##  [96] -0.126100142 -0.106100142 -0.135100142 -0.190100142 -0.095100142
## attr(,"label")
## [1] "Residuals"
## 
## $alt1
##  [1] -0.009322826 -0.105322826 -0.021322826  0.106287555  0.080287555
##  [6]  0.142287555 -0.082504992 -0.048504992 -0.120504992  0.070939089
## [11] -0.014060911  0.008939089
## attr(,"label")
## [1] "Residuals"
## 
## $alt2
##  [1] -0.003595025 -0.030595025 -0.116595025  0.099389796  0.132389796
##  [6]  0.132389796 -0.157222349 -0.041222349 -0.076222349  0.031468644
## [11] -0.016531356  0.058468644
## attr(,"label")
## [1] "Residuals"
## 
## $alt3
##  [1] -1.088794e-02 -1.008879e-01 -9.887937e-03  9.663941e-02  8.263941e-02
##  [6]  1.146394e-01 -8.121607e-02 -4.421607e-02 -9.821607e-02 -1.656477e-05
## [11]  6.298344e-02 -4.016565e-03
## attr(,"label")
## [1] "Residuals"
## 
## $alt4
##  [1]  0.02236965 -0.02863035 -0.09263035  0.04539808  0.07739808  0.10039808
##  [7] -0.05170589 -0.06770589 -0.05070589 -0.03619672  0.01480328  0.06380328
## attr(,"label")
## [1] "Residuals"
## 
## $alt5
##  [1] -0.05599604 -0.04399604 -0.04699604  0.12262958  0.09062958  0.13462958
##  [7] -0.01849799 -0.06849799 -0.17049799 -0.01430676  0.07669324  0.00469324
## attr(,"label")
## [1] "Residuals"
## 
## $alt6
##  [1] -0.10759976 -0.05059976 -0.07159976  0.14770160  0.19470160  0.13470160
##  [7] -0.08134445 -0.16334445 -0.07934445  0.03046397  0.02746397  0.01346397
## attr(,"label")
## [1] "Residuals"
## 
## $alt7
##  [1] -0.065650886 -0.021650886 -0.016650886 -0.020650886 -0.007650886
##  [6] -0.132650886 -0.076650886  0.151635329  0.069635329  0.121635329
## [11]  0.128635329  0.111635329  0.146635329  0.073635329 -0.118714142
## [16] -0.099714142 -0.029714142 -0.052714142 -0.127714142 -0.067714142
## [21] -0.070714142 -0.039294236 -0.020294236  0.048705764  0.068705764
## [26]  0.039705764 -0.015294236  0.079705764
## attr(,"label")
## [1] "Residuals"
vcov(result)
## $nullvcov
##             a            b            d            p
## a 51844.89228 -40.29962624 -564.4349190 -55.77486410
## b   -40.29963   0.03142558    0.4338027   0.04311901
## d  -564.43492   0.43380273    6.3929849   0.61871461
## p   -55.77486   0.04311901    0.6187146   0.06055953
## 
## $altvcov
## $altvcov[[1]]
##               a             b             d             p
## a 10431222.6621 -702.30690984 -6009.4739114 -672.57006167
## b     -702.3069    0.04728794     0.4039575    0.04524857
## d    -6009.4739    0.40395750     3.5848487    0.39373262
## p     -672.5701    0.04524857     0.3937326    0.04369795
## 
## $altvcov[[2]]
##               a             b             d             p
## a 19357101.3875 -1.007882e+03 -8696.1425609 -991.76081687
## b    -1007.8817  5.248124e-02     0.4521383    0.05160415
## d    -8696.1426  4.521383e-01     4.0485033    0.45288236
## p     -991.7608  5.160415e-02     0.4528824    0.05120894
## 
## $altvcov[[3]]
##              a             b             d             p
## a 4552740.6012 -368.82485376 -3266.9390972 -383.37329769
## b    -368.8249    0.02988215     0.2641473    0.03102956
## d   -3266.9391    0.26414732     2.4323079    0.27978712
## p    -383.3733    0.03102956     0.2797871    0.03254357
## 
## $altvcov[[4]]
##             a            b            d             p
## a 6864.613177 -6.441993822 -106.1120514 -10.722910959
## b   -6.441994  0.006083797    0.0979416   0.009982823
## d -106.112051  0.097941600    1.7115199   0.169127175
## p  -10.722911  0.009982823    0.1691272   0.016917384
## 
## $altvcov[[5]]
##               a             b             d             p
## a 13384078.4830 -647.23678733 -5679.4579995 -770.23962725
## b     -647.2368    0.03130170     0.2741913    0.03721898
## d    -5679.4580    0.27419125     2.5105305    0.33292473
## p     -770.2396    0.03721898     0.3329247    0.04471290
## 
## $altvcov[[6]]
##            a            b           d            p
## a  2.9829643  0.135178667 -2.15569687 -0.142999855
## b  0.1351787  0.006239608 -0.09913444 -0.006522981
## d -2.1556969 -0.099134444  1.59005286  0.103073149
## p -0.1429999 -0.006522981  0.10307315  0.006924272
## 
## $altvcov[[7]]
##               a             b             d             p
## a 1793801.54273 -96.727076966 -902.85160913 -1.522000e+02
## b     -96.72708   0.005216407    0.04856954  8.198266e-03
## d    -902.85161   0.048569544    0.47728695  7.828112e-02
## p    -152.20003   0.008198266    0.07828112  1.304449e-02
likRatio(result)
##    logLR 
## 303.9124
wilk(result)
##      p-value 
## 2.731481e-50
coef(result)
##               a           b          d         p
## null  32.051086 0.033332155 0.00000000 0.2091503
## alt1 117.088904 0.008313710 0.05396137 0.2067853
## alt2 132.050034 0.007209225 0.10761939 0.2097940
## alt3 103.291695 0.008900141 0.22258216 0.2124511
## alt4  27.362940 0.037597301 0.00000000 0.2150760
## alt5 123.838511 0.006295771 0.37946519 0.2253352
## alt6   8.320486 0.130807205 0.00000000 0.3096142
## alt7 102.154737 0.005857065 0.62040217 0.2477585
deviance(result)
##        null        alt1        alt2        alt3        alt4        alt5 
## 14.09056863  0.07861459  0.09638597  0.06188589  0.04346057  0.08886897 
##        alt6        alt7 
##  0.13859104  0.19839011
summary(result)
## $null
## 
## Formula: value ~ a * (1 - exp(-b * (timepoint)^p)) + d
## 
## Parameters:
##    Estimate Std. Error t value Pr(>|t|)
## a  32.05109  227.69473   0.141    0.888
## b   0.03333    0.17727   0.188    0.851
## d   0.00000    2.52844   0.000    1.000
## p   0.20915    0.24609   0.850    0.397
## 
## Residual standard error: 0.3831 on 96 degrees of freedom
## 
## Number of iterations to convergence: 53 
## Achieved convergence tolerance: 1.49e-08
## 
## 
## $alt1
## 
## Formula: value ~ a * (1 - exp(-b * (timepoint)^p)) + d
## 
## Parameters:
##    Estimate Std. Error t value Pr(>|t|)
## a 1.171e+02  3.230e+03   0.036    0.972
## b 8.314e-03  2.175e-01   0.038    0.970
## d 5.396e-02  1.893e+00   0.029    0.978
## p 2.068e-01  2.090e-01   0.989    0.352
## 
## Residual standard error: 0.09913 on 8 degrees of freedom
## 
## Number of iterations till stop: 98 
## Achieved convergence tolerance: 1.49e-08
## Reason stopped: Number of calls to `fcn' has reached or exceeded `maxfev' == 500.
## 
## 
## $alt2
## 
## Formula: value ~ a * (1 - exp(-b * (timepoint)^p)) + d
## 
## Parameters:
##    Estimate Std. Error t value Pr(>|t|)
## a 1.321e+02  4.400e+03   0.030    0.977
## b 7.209e-03  2.291e-01   0.031    0.976
## d 1.076e-01  2.012e+00   0.053    0.959
## p 2.098e-01  2.263e-01   0.927    0.381
## 
## Residual standard error: 0.1098 on 8 degrees of freedom
## 
## Number of iterations till stop: 98 
## Achieved convergence tolerance: 1.49e-08
## Reason stopped: Number of calls to `fcn' has reached or exceeded `maxfev' == 500.
## 
## 
## $alt3
## 
## Formula: value ~ a * (1 - exp(-b * (timepoint)^p)) + d
## 
## Parameters:
##    Estimate Std. Error t value Pr(>|t|)
## a  103.2917  2133.7152   0.048    0.963
## b    0.0089     0.1729   0.051    0.960
## d    0.2226     1.5596   0.143    0.890
## p    0.2125     0.1804   1.178    0.273
## 
## Residual standard error: 0.08795 on 8 degrees of freedom
## 
## Number of iterations till stop: 97 
## Achieved convergence tolerance: 1.49e-08
## Reason stopped: Number of calls to `fcn' has reached or exceeded `maxfev' == 500.
## 
## 
## $alt4
## 
## Formula: value ~ a * (1 - exp(-b * (timepoint)^p)) + d
## 
## Parameters:
##   Estimate Std. Error t value Pr(>|t|)
## a  27.3629    82.8530   0.330    0.750
## b   0.0376     0.0780   0.482    0.643
## d   0.0000     1.3082   0.000    1.000
## p   0.2151     0.1301   1.654    0.137
## 
## Residual standard error: 0.07371 on 8 degrees of freedom
## 
## Number of iterations to convergence: 58 
## Achieved convergence tolerance: 1.49e-08
## 
## 
## $alt5
## 
## Formula: value ~ a * (1 - exp(-b * (timepoint)^p)) + d
## 
## Parameters:
##    Estimate Std. Error t value Pr(>|t|)
## a 1.238e+02  3.658e+03   0.034    0.974
## b 6.296e-03  1.769e-01   0.036    0.972
## d 3.795e-01  1.584e+00   0.239    0.817
## p 2.253e-01  2.115e-01   1.066    0.318
## 
## Residual standard error: 0.1054 on 8 degrees of freedom
## 
## Number of iterations till stop: 97 
## Achieved convergence tolerance: 1.49e-08
## Reason stopped: Number of calls to `fcn' has reached or exceeded `maxfev' == 500.
## 
## 
## $alt6
## 
## Formula: value ~ a * (1 - exp(-b * (timepoint)^p)) + d
## 
## Parameters:
##   Estimate Std. Error t value Pr(>|t|)   
## a  8.32049    1.72713   4.818  0.00133 **
## b  0.13081    0.07899   1.656  0.13632   
## d  0.00000    1.26097   0.000  1.00000   
## p  0.30961    0.08321   3.721  0.00586 **
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 0.1316 on 8 degrees of freedom
## 
## Number of iterations to convergence: 38 
## Achieved convergence tolerance: 1.49e-08
## 
## 
## $alt7
## 
## Formula: value ~ a * (1 - exp(-b * (timepoint)^p)) + d
## 
## Parameters:
##    Estimate Std. Error t value Pr(>|t|)  
## a 1.022e+02  1.339e+03   0.076   0.9398  
## b 5.857e-03  7.222e-02   0.081   0.9360  
## d 6.204e-01  6.909e-01   0.898   0.3781  
## p 2.478e-01  1.142e-01   2.169   0.0402 *
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 0.09092 on 24 degrees of freedom
## 
## Number of iterations till stop: 95 
## Achieved convergence tolerance: 1.49e-08
## Reason stopped: Number of calls to `fcn' has reached or exceeded `maxfev' == 500.